Military AI Systems
Balancing Power and Ethics: A Framework for Addressing Human Rights Concerns in Military AI
Islam, Mst Rafia, Wasi, Azmine Toushik
AI has made significant strides in recent years, leading to a wide range of applications in both civilian and military sectors. Militaries see AI as a means of developing faster and more effective technologies. While AI offers benefits such as improved operational efficiency and precision targeting, it also raises serious ethical and legal concerns, particularly regarding human rights violations. Autonomous weapons that make decisions without human input can threaten the right to life and violate international humanitarian law. To address these issues, we propose a three-stage framework (Design, In Deployment, and During/After Use) for evaluating human rights concerns in the design, deployment, and use of military AI. Each phase includes multiple components that address concerns specific to that phase, ranging from bias and regulatory issues to violations of international humanitarian law. Through this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights.
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
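The three-phase structure of the framework above can be sketched as a simple checklist evaluator. This is a minimal illustration, not the authors' implementation: the phase names follow the abstract, but the specific concern names and the `flagged_concerns` helper are hypothetical.

```python
from enum import Enum


class Phase(Enum):
    """The three phases of the proposed evaluation framework."""
    DESIGN = "Design"
    DEPLOYMENT = "In Deployment"
    USE = "During/After Use"


# Hypothetical mapping from each phase to example human rights concerns;
# the concern names are illustrative, not taken from the paper.
CONCERNS = {
    Phase.DESIGN: ["training-data bias", "regulatory compliance"],
    Phase.DEPLOYMENT: ["human oversight", "accountability"],
    Phase.USE: ["IHL compliance", "proportionality of harm"],
}


def flagged_concerns(phase, assessments):
    """Return the concerns of a phase whose review did not pass.

    `assessments` maps a concern name to a bool (True = passed review);
    any concern missing from the mapping is treated as unreviewed.
    """
    return [c for c in CONCERNS[phase] if not assessments.get(c, False)]
```

For example, a design-phase review that has only cleared the bias check would still flag regulatory compliance as an open concern.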
An Architecture for a Military AI System with Ethical Rules
Wang, Yetian (University of Waterloo) | Friyia, Daniel (University of Waterloo) | Liu, Kanzhe (University of Waterloo) | Cohen, Robin (University of Waterloo)
The current era of computer science has seen a significant increase in the application of machine learning (ML) and knowledge representation (KR). The difficulty with applying either to ethics in AI lies in their weaknesses when used separately: ML "learns" ethical behaviour as it is observed and may therefore come to disagree with human morals, while KR is too rigid and can only process scenarios that have been predefined. This paper proposes a solution to the question posed by Rossi (2016): "How to combine bottom-up learning approaches with top-down rule-based approaches in defining ethical principles for AI systems?" The system focuses on potentially unethical behaviour caused by human nature rather than on ethical dilemmas caused by technological insufficiency in wartime scenarios. Our solution is an architecture that combines a classifier to identify targets in wartime scenarios with a rules-based system, in the form of ontologies, that guides an AI agent's behaviour in the given circumstance.
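The hybrid architecture described in this abstract, a bottom-up classifier gated by top-down rules, can be sketched as follows. This is a hedged illustration only: the stub classifier, the rule predicates, and the `may_engage` function are all hypothetical stand-ins (a real system would use a trained model and an ontology reasoner, neither of which is shown here).

```python
def classify_target(features):
    """Stand-in for the bottom-up ML classifier.

    Returns a (label, confidence) pair. The hand-written logic below
    merely mimics what a trained model would output.
    """
    if features.get("armed") and not features.get("surrendering"):
        return "combatant", 0.9
    return "non-combatant", 0.8


# Top-down rule layer standing in for the ontology: every predicate must
# hold before the agent may act. The rules are illustrative examples of
# constraints one might encode, not the paper's actual rule set.
RULES = [
    lambda label, conf, ctx: label == "combatant",           # distinction
    lambda label, conf, ctx: conf >= 0.85,                   # certainty threshold
    lambda label, conf, ctx: not ctx.get("protected_site"),  # e.g. a hospital
]


def may_engage(features, context):
    """Combine the learned classification with the rule-based check."""
    label, conf = classify_target(features)
    return all(rule(label, conf, context) for rule in RULES)
```

The design point the sketch makes is the one the abstract argues for: the classifier alone never authorises an action; its output is always filtered through explicit, predefined rules, so a confident but rule-violating classification (e.g. a target inside a protected site) is still rejected.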